A filtering scheme for confocal laser endomicroscopy (CLE)-video sequences for self-supervised learning
Porsche, Nils, Müller-Diesing, Flurin, Banerjee, Sweta, Goncalves, Miguel, Aubreville, Marc
Confocal laser endomicroscopy (CLE) is a non-invasive, real-time imaging modality that can be used for in-situ, in-vivo imaging and the microstructural analysis of mucosal structures. Diagnosis using CLE is, however, complicated by images being hard to interpret for non-experienced physicians. Utilizing machine learning as an augmentative tool would hence be beneficial, but is complicated by the shortage of histopathology-correlated CLE imaging sequences relative to the plurality of patterns in this domain, leading to overfitting of machine learning models. To overcome this, self-supervised learning (SSL) can be employed on larger unlabeled datasets. CLE is a video-based modality with high inter-frame correlation, leading to a non-stratified data distribution for SSL training. In this work, we propose a filter functionality for CLE video sequences that reduces dataset redundancy in SSL training and improves SSL training convergence and training efficiency. We use four state-of-the-art baseline networks and an SSL teacher-student network with a vision transformer small backbone for the evaluation. These networks were evaluated on downstream tasks for a sinonasal tumor dataset and a squamous cell carcinoma of the skin dataset. On both datasets, we found the highest test accuracy for the filtered SSL-pretrained model, at 67.48% and 73.52%, respectively, both considerably outperforming their non-SSL baselines. Our results show that SSL is an effective method for CLE pretraining. Further, we show that our proposed CLE video filter can be utilized to improve training efficiency in self-supervised scenarios, resulting in a reduction of 67% in training time.
- Europe > Germany > Bavaria > Lower Franconia > Würzburg (0.05)
- Europe > Germany > North Rhine-Westphalia > Cologne Region > Aachen (0.04)
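To make the redundancy-reduction idea above concrete, here is a minimal sketch, not the authors' actual filter, that drops near-duplicate CLE frames by thresholding the normalized cross-correlation between consecutive frames before SSL pretraining; the threshold value and the frame format are assumptions.

```python
import numpy as np

def filter_redundant_frames(frames, max_corr=0.95):
    """Keep a frame only if it is sufficiently dissimilar to the last kept frame.

    frames: iterable of 2-D grayscale arrays from one CLE video sequence.
    max_corr: correlation above which a frame is treated as redundant (assumed value).
    """
    kept, last = [], None
    for f in frames:
        f = f.astype(np.float32)
        if last is None:
            kept.append(f)
            last = f
            continue
        a = (f - f.mean()) / (f.std() + 1e-8)
        b = (last - last.mean()) / (last.std() + 1e-8)
        corr = float((a * b).mean())   # normalized cross-correlation in [-1, 1]
        if corr < max_corr:            # frame adds new information -> keep it
            kept.append(f)
            last = f
    return kept
```

Frames that survive such a filter would then feed the teacher-student SSL pretraining described in the abstract.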
Lung Cancer Classification from CT Images Using ResNet
Adekunle, Olajumoke O., Akinyemi, Joseph D., Ladoja, Khadijat T., Onifade, Olufade F. W.
Lung cancer, a malignancy originating in lung tissues, is commonly diagnosed and classified using medical imaging techniques, particularly computed tomography (CT). Despite the integration of machine learning and deep learning methods, the predictive efficacy of automated systems for lung cancer classification from CT images remains below the threshold desired for clinical adoption. Existing research predominantly focuses on binary classification, distinguishing between malignant and benign lung nodules. In this study, a novel deep learning-based approach is introduced, aimed at improved multi-class classification that discerns various subtypes of lung cancer from CT images. Leveraging a pre-trained ResNet model, lung tissue images were classified into three distinct classes, two malignant and one benign. Employing a dataset of 15,000 lung CT images sourced from the LC25000 histopathological image dataset, the ResNet50 model was trained on 10,200 images, validated on 2,550 images, and tested on the remaining 2,250 images. Through the incorporation of custom layers atop the ResNet architecture and meticulous hyperparameter fine-tuning, a test accuracy of 98.8% was recorded. This represents a notable enhancement over the performance of prior models on the same dataset.
- Health & Medicine > Therapeutic Area > Pulmonary/Respiratory Diseases (1.00)
- Health & Medicine > Therapeutic Area > Oncology > Lung Cancer (1.00)
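The recipe described above, a pre-trained ResNet50 with custom layers on top for three classes, can be sketched as follows; the specific head layers, dropout rate, and frozen-backbone choice are assumptions rather than the authors' reported configuration.

```python
import torch.nn as nn
from torchvision import models

def build_lung_ct_classifier(num_classes=3, freeze_backbone=True):
    # Start from an ImageNet-pretrained ResNet50 backbone.
    model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
    if freeze_backbone:
        for p in model.parameters():
            p.requires_grad = False
    # Replace the final fully connected layer with a small custom head.
    model.fc = nn.Sequential(
        nn.Linear(model.fc.in_features, 256),
        nn.ReLU(inplace=True),
        nn.Dropout(0.3),
        nn.Linear(256, num_classes),
    )
    return model
```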
A Histological
These images were evenly split between cases diagnosed with adenocarcinoma of the lung and squamous cell carcinoma, representing the two most common subtypes of lung cancer. The images were scanned on an Aperio scanner at a resolution of 0 . The classes used for conditioning were annotated digitally by a pathologist using an Apple Pencil, with the instruction to clearly demarcate boundaries between tissue regions. The pathologist could choose from a list of 40 distinct annotation categories, aiming to cover all possible annotation requirements. All data handling was performed in strict accordance with privacy regulations and ethical standards, ensuring the protection of patient information at all times.
A Novel Recurrent Neural Network Framework for Prediction and Treatment of Oncogenic Mutation Progression
Parthasarathy, Rishab, Bhowmik, Achintya
Despite significant medical advancements, cancer remains the second leading cause of death, with over 600,000 deaths per year in the US. One emerging field, pathway analysis, is promising but still relies on manually derived wet-lab data, which is time-consuming to acquire. This work proposes an efficient, effective end-to-end framework for Artificial Intelligence (AI)-based pathway analysis that predicts both cancer severity and mutation progression, thus recommending possible treatments. The proposed technique involves a novel combination of time-series machine learning models and pathway analysis. First, mutation sequences were isolated from The Cancer Genome Atlas (TCGA) database. Then, a novel preprocessing algorithm was used to filter key mutations by mutation frequency. This data was fed into a Recurrent Neural Network (RNN) that predicted cancer severity. The model then probabilistically combined the RNN predictions, information from the preprocessing algorithm, and multiple drug-target databases to predict future mutations and recommend possible treatments. This framework achieved robust results, with Receiver Operating Characteristic (ROC) curves and accuracies greater than 60%, similar to existing cancer diagnostics. In addition, preprocessing played an instrumental role in isolating important mutations, demonstrating that each cancer stage studied may contain on the order of a few hundred key driver mutations, consistent with current research. Heatmaps based on predicted gene frequency were also generated, highlighting key mutations in each cancer. Overall, this work is the first to propose an efficient, cost-effective end-to-end framework for projecting cancer progression and providing possible treatments without relying on expensive, time-consuming wet-lab work.
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.14)
- North America > Canada > Alberta (0.14)
- North America > United States > California > San Diego County > San Diego (0.04)
- (9 more...)
- Research Report > New Finding (0.67)
- Research Report > Experimental Study (0.46)
- Government > Regional Government > North America Government > United States Government (0.67)
- Health & Medicine > Therapeutic Area > Oncology > Carcinoma (0.47)
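As a schematic of the sequence-to-severity step described above (not the authors' architecture), a small LSTM can map a sequence of mutation indices to a severity class; the vocabulary size, dimensions, and number of severity classes below are assumptions.

```python
import torch
import torch.nn as nn

class MutationSeverityRNN(nn.Module):
    def __init__(self, num_mutations=500, embed_dim=64, hidden_dim=128, num_classes=4):
        super().__init__()
        self.embed = nn.Embedding(num_mutations, embed_dim, padding_idx=0)
        self.rnn = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_classes)

    def forward(self, mutation_ids):      # (batch, seq_len) integer tensor of mutation IDs
        x = self.embed(mutation_ids)      # (batch, seq_len, embed_dim)
        _, (h_n, _) = self.rnn(x)         # h_n: (num_layers, batch, hidden_dim)
        return self.head(h_n[-1])         # (batch, num_classes) severity logits
```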
WSI-LLaVA: A Multimodal Large Language Model for Whole Slide Image
Liang, Yuci, Lyu, Xinheng, Ding, Meidan, Chen, Wenting, Zhang, Jipeng, Ren, Yuexiang, He, Xiangjian, Wu, Song, Yang, Sen, Wang, Xiyue, Xing, Xiaohan, Shen, Linlin
Recent advancements in computational pathology have produced patch-level Multi-modal Large Language Models (MLLMs), but these models are limited by their inability to analyze whole slide images (WSIs) comprehensively and their tendency to bypass crucial morphological features that pathologists rely on for diagnosis. To address these challenges, we first introduce WSI-Bench, a large-scale morphology-aware benchmark containing 180k VQA pairs from 9,850 WSIs across 30 cancer types, designed to evaluate MLLMs' understanding of morphological characteristics crucial for accurate diagnosis. Building upon this benchmark, we present WSI-LLaVA, a novel framework for gigapixel WSI understanding that employs a three-stage training approach: WSI-text alignment, feature space alignment, and task-specific instruction tuning. To better assess model performance in pathological contexts, we develop two specialized WSI metrics: WSI-Precision and WSI-Relevance. Experimental results demonstrate that WSI-LLaVA outperforms existing models across all capability dimensions, with a significant improvement in morphological analysis, establishing a clear correlation between morphological understanding and diagnostic accuracy.
- Asia > China > Jiangxi Province > Nanchang (0.04)
- Asia > China > Hong Kong (0.04)
- Asia > China > Guangdong Province > Shenzhen (0.04)
- (3 more...)
- Health & Medicine > Therapeutic Area > Oncology > Carcinoma (1.00)
- Health & Medicine > Therapeutic Area > Dermatology (1.00)
- Health & Medicine > Diagnostic Medicine (1.00)
- (2 more...)
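Gigapixel WSIs cannot be fed to a model directly; a common preparation step, sketched generically below and not taken from WSI-LLaVA, is to embed tissue patches with a frozen encoder and pool them into a single slide-level feature that a language model can condition on. The patch tensor shape and encoder interface are assumptions.

```python
import torch

@torch.no_grad()
def slide_level_embedding(patches, patch_encoder, batch_size=64):
    """patches: tensor (num_patches, 3, H, W); patch_encoder: module mapping an image
    batch to (batch, feature_dim). Returns a mean-pooled (feature_dim,) slide feature."""
    feats = []
    for i in range(0, patches.shape[0], batch_size):
        feats.append(patch_encoder(patches[i:i + batch_size]))
    return torch.cat(feats, dim=0).mean(dim=0)
```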
Zero-Shot Whole Slide Image Retrieval in Histopathology Using Embeddings of Foundation Models
Alfasly, Saghir, Alabtah, Ghazal, Hemati, Sobhan, Kalari, Krishna Rani, Tizhoosh, H. R.
We have tested recently published foundation models for histopathology for image retrieval. We report the macro-averaged F1 score for top-1 retrieval, the majority vote of the top-3 retrievals, and the majority vote of the top-5 retrievals. We perform zero-shot retrieval, i.e., we do not alter the embeddings and we do not train any classifier. As test data, we used diagnostic slides from The Cancer Genome Atlas (TCGA), covering 23 organs and 117 cancer subtypes. As the search platform, we used Yottixel, which enabled us to perform WSI search using patches.
- North America > United States > Minnesota > Olmsted County > Rochester (0.04)
- North America > United States > Illinois > Cook County > Chicago (0.04)
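The evaluation protocol above can be reproduced with a short script, assuming patch or slide embeddings have already been extracted by the foundation model under test; the cosine-similarity choice, the majority-vote rule, and the helper name below are illustrative assumptions, not Yottixel's API.

```python
import numpy as np
from collections import Counter
from sklearn.metrics import f1_score
from sklearn.preprocessing import normalize

def retrieval_macro_f1(query_emb, query_labels, index_emb, index_labels, k=5):
    """Macro F1 for top-1, majority-of-top-3, and majority-of-top-5 retrievals."""
    sims = normalize(query_emb) @ normalize(index_emb).T    # cosine similarities
    topk = np.argsort(-sims, axis=1)[:, :k]                 # k nearest index slides per query
    preds = {1: [], 3: [], 5: []}
    for row in topk:
        neighbours = [index_labels[i] for i in row]
        preds[1].append(neighbours[0])
        preds[3].append(Counter(neighbours[:3]).most_common(1)[0][0])
        preds[5].append(Counter(neighbours[:5]).most_common(1)[0][0])
    return {n: f1_score(query_labels, p, average="macro") for n, p in preds.items()}
```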
Deep Neural Networks for Predicting Recurrence and Survival in Patients with Esophageal Cancer After Surgery
Zheng, Yuhan, Elliott, Jessie A, Reynolds, John V, Markar, Sheraz R, Papież, Bartłomiej W., ENSURE Study Group
Esophageal cancer is a major cause of cancer-related mortality internationally, with high recurrence rates and poor survival even among patients treated with curative-intent surgery. Investigating relevant prognostic factors and predicting prognosis can enhance post-operative clinical decision-making and potentially improve patients' outcomes. In this work, we assessed prognostic factor identification and the discriminative performance of three models for Disease-Free Survival (DFS) and Overall Survival (OS) using a large multicenter international dataset from the ENSURE study. We first employed the Cox Proportional Hazards (CoxPH) model to assess the impact of each feature on outcomes. Subsequently, we utilised CoxPH and two deep neural network (DNN)-based models, DeepSurv and DeepHit, to predict DFS and OS. The significant prognostic factors identified by our models were consistent with the clinical literature, with post-operative pathologic features showing higher significance than clinical stage features. DeepSurv and DeepHit demonstrated discriminative accuracy comparable to CoxPH, with DeepSurv slightly outperforming on both the DFS and OS prediction tasks, achieving C-indices of 0.735 and 0.74, respectively. While these results suggest the potential of DNNs as prognostic tools for improving predictive accuracy and providing personalised guidance with respect to risk stratification, CoxPH remains an adequately good prediction model for the data used in this study.
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.14)
- Europe > Ireland > Leinster > County Dublin > Dublin (0.14)
- North America > United States > New York (0.04)
- (4 more...)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Health & Medicine > Therapeutic Area > Otolaryngology (1.00)
- Health & Medicine > Therapeutic Area > Oncology > Head & Neck Cancer (0.88)
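For readers unfamiliar with the CoxPH baseline mentioned above, a minimal sketch using the lifelines package is shown below; the file name and column names are placeholders, not the ENSURE study's actual variables.

```python
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.utils import concordance_index

df = pd.read_csv("survival_features.csv")        # hypothetical table: one row per patient
cph = CoxPHFitter()
cph.fit(df, duration_col="dfs_months", event_col="recurrence")   # Disease-Free Survival
cph.print_summary()                               # hazard ratios for each prognostic factor

# Discrimination: concordance index (C-index) between predicted risk and observed outcomes.
risk = cph.predict_partial_hazard(df)
print(concordance_index(df["dfs_months"], -risk, df["recurrence"]))
```

DeepSurv replaces the linear risk function of CoxPH with a neural network trained on the same partial-likelihood objective, which is why the models can be compared directly by C-index.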
CPLIP: Zero-Shot Learning for Histopathology with Comprehensive Vision-Language Alignment
Javed, Sajid, Mahmood, Arif, Ganapathi, Iyyakutti Iyappan, Dharejo, Fayaz Ali, Werghi, Naoufel, Bennamoun, Mohammed
This paper proposes Comprehensive Pathology Language Image Pre-training (CPLIP), a new unsupervised technique designed to enhance the alignment of images and text in histopathology for tasks such as classification and segmentation. This methodology enriches vision-language models by leveraging extensive data without needing ground truth annotations. CPLIP involves constructing a pathology-specific dictionary, generating textual descriptions for images using language models, and retrieving relevant images for each text snippet via a pre-trained model. The model is then fine-tuned using a many-to-many contrastive learning method to align complex interrelated concepts across both modalities. Evaluated across multiple histopathology tasks, CPLIP shows notable improvements in zero-shot learning scenarios, outperforming existing methods in both interpretability and robustness and setting a higher benchmark for the application of vision-language models in the field. To encourage further research and replication, the code for CPLIP is available on GitHub at https://cplip.github.io/
- Oceania > Australia > Western Australia (0.04)
- North America > United States > Louisiana > Orleans Parish > New Orleans (0.04)
- Europe > Austria (0.04)
- (2 more...)
- Information Technology > Artificial Intelligence > Vision (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.94)
- Information Technology > Artificial Intelligence > Machine Learning > Performance Analysis > Accuracy (0.67)
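A rough sketch of what a many-to-many contrastive objective can look like, in the spirit of but not identical to CPLIP's loss: each image may have several matching texts and vice versa, encoded in a binary positives matrix, and the loss is a soft cross-entropy over similarity logits. The temperature value and the positives matrix are assumptions.

```python
import torch
import torch.nn.functional as F

def many_to_many_contrastive(img_emb, txt_emb, positives, temperature=0.07):
    """img_emb: (N, d), txt_emb: (M, d), positives: (N, M) binary mask of matched pairs."""
    logits = F.normalize(img_emb, dim=-1) @ F.normalize(txt_emb, dim=-1).t() / temperature
    # Image-to-text: spread the target mass over all positive texts of each image.
    t_i2t = positives / positives.sum(dim=1, keepdim=True).clamp(min=1)
    loss_i2t = -(t_i2t * F.log_softmax(logits, dim=1)).sum(dim=1).mean()
    # Text-to-image: symmetric term over all positive images of each text.
    t_t2i = positives / positives.sum(dim=0, keepdim=True).clamp(min=1)
    loss_t2i = -(t_t2i * F.log_softmax(logits, dim=0)).sum(dim=0).mean()
    return 0.5 * (loss_i2t + loss_t2i)
```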
Artificial intelligence based prediction on lung cancer risk factors using deep learning
Sohaib, Muhammad, Adewunmi, Mary
In this work, we identified significant research issues concerning lung cancer risk factors. Capturing and defining symptoms at an early stage is one of the most difficult phases for patients. Based on the history of patient records, we reviewed a number of current research studies on lung cancer and its various stages. We identified that lung cancer is one of the significant research issues in predicting the early stages of cancer disease. This research aimed to develop a model that can detect lung cancer with a remarkably high level of accuracy using a deep learning approach (a convolutional neural network). This method considers and resolves significant gaps in previous studies. We compared the accuracy levels and loss values of our model with VGG16, InceptionV3, and ResNet50. We found that our model achieved an accuracy of 94% and a minimum loss of 0.1%. Hence, physicians can use our convolutional neural network model for predicting lung cancer risk factors in the real world. Moreover, this investigation reveals that squamous cell carcinoma, normal, adenocarcinoma, and large cell carcinoma are the most significant risk factors. In addition, the remaining attributes are also crucial for achieving the best performance.
- Asia > China > Tianjin Province > Tianjin (0.05)
- Oceania > Australia > Tasmania > Hobart (0.04)
- North America > United States (0.04)
- (4 more...)
- Research Report > New Finding (0.47)
- Research Report > Experimental Study (0.46)
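Since the abstract above does not specify the network in detail, the sketch below shows only a generic small convolutional classifier for the four image classes it lists (adenocarcinoma, large cell carcinoma, squamous cell carcinoma, normal); the layer sizes and input resolution are assumptions, not the authors' architecture.

```python
import torch.nn as nn

class SmallLungCNN(nn.Module):
    def __init__(self, num_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(128, num_classes)

    def forward(self, x):                    # x: (batch, 3, H, W) lung image tensor
        x = self.features(x).flatten(1)      # (batch, 128) pooled features
        return self.classifier(x)            # (batch, num_classes) class logits
```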